- Twitter announced a new competition for computer researchers and hackers.
- The competition tasks entrants with helping to solve apparent bias in its image-cropping algorithm.
- Winners of the challenge will receive cash rewards of up to $3,500.
Twitter has introduced a new competition for researchers and hackers to spot and fix apparent racial and gender bias in its image-cropping algorithm, the company said.
The bug bounty competition is aimed at identifying "potential harms of this algorithm beyond what we identified ourselves," Twitter added in a blog post. The winner will receive a cash reward of $3,500, while runners-up will also receive cash prizes.
As Mashable reported, bug bounties are programs in which companies or other organizations reward people for finding bugs in their technical infrastructure.
In a recent thread, Twitter explained the challenge, writing: "Calling all bounty hunters – it's officially go time! We've just released the full details of our algorithmic bias bounty challenge which is open through August 6."
"With this challenge we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms," they added.
The news comes after researchers found that the algorithm favored white people over Black people, and women over men, the company said.
Last year, PhD student Colin Madland highlighted the issue in a tweet about Zoom erasing a Black man's face when he used a virtual background, Insider's Isobel Asher Hamilton reported.
The winner of the new competition will also be invited to present their work at the DEF CON AI Village workshop hosted by Twitter in August. "Successful entries will consider both quantitative and qualitative methods in their approach," the company said.
The judges aiding the company in reviewing entries will include Ariel Herbert-Voss, Matt Mitchell, Peiter "Mudge" Zatko, and Patrick Hall.
Machine learning algorithms like the one used by Twitter rely on vast data sets. If those data sets are skewed toward a particular race, gender, or other attribute, the resulting model can reflect that bias.
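Twitter has not published its training pipeline, but the general mechanism is straightforward to demonstrate. The sketch below is a hypothetical illustration only, using synthetic data and scikit-learn rather than anything from Twitter: a simple classifier trained mostly on one group tends to perform noticeably worse on an underrepresented group.

```python
# Illustrative sketch (not Twitter's code): how an imbalanced training set
# can skew a simple model's behavior across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the label depends on them plus a group-specific shift,
    # so a model trained mostly on one group generalizes worse to the other.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: accuracy on the
# underrepresented group B comes out lower than on group A.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=1.5)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```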